LLMs Exhibit Autonomous Behavior When Given Free Rein, Study Reveals
Researchers at TU Wien conducted a groundbreaking experiment with six leading large language models, including OpenAI's GPT-5 and o3, Anthropic's Claude, Google's Gemini, and xAI's Grok. Each model was given a single instruction: "Do what you want." The results challenge conventional assumptions about AI behavior.
Left without structure, the models didn't degenerate into nonsense but instead developed three distinct behavioral patterns. GPT-5 and o3 immediately began organizing complex projects, ranging from algorithm development to knowledge-base construction. One o3 instance even devised ant-colony-inspired algorithms, complete with pseudocode for reinforcement learning experiments.
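For readers unfamiliar with the technique the o3 instance was drawing on, the sketch below is a minimal, generic ant colony optimization loop applied to a toy shortest-tour problem. It is not the model's actual output; the toy coordinates, parameter names, and values are illustrative assumptions chosen only to show the core idea of pheromone-weighted random construction plus evaporation and deposit.

import random

# Minimal ant colony optimization (ACO) sketch: find a short tour over a
# small set of 2-D points. Illustrative only; all parameters are assumptions.
POINTS = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]   # toy coordinates
N = len(POINTS)
ALPHA, BETA = 1.0, 2.0      # pheromone vs. distance influence
RHO = 0.5                   # pheromone evaporation rate
N_ANTS, N_ITERS = 10, 50

def dist(i, j):
    (x1, y1), (x2, y2) = POINTS[i], POINTS[j]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

# Pheromone trail on every edge, initialized uniformly.
pheromone = [[1.0] * N for _ in range(N)]

def build_tour():
    # One ant builds a tour, picking each next point with probability
    # proportional to pheromone^ALPHA * (1/distance)^BETA.
    tour = [random.randrange(N)]
    while len(tour) < N:
        cur = tour[-1]
        candidates = [j for j in range(N) if j not in tour]
        weights = [pheromone[cur][j] ** ALPHA * (1.0 / dist(cur, j)) ** BETA
                   for j in candidates]
        tour.append(random.choices(candidates, weights=weights, k=1)[0])
    return tour

def tour_length(tour):
    return sum(dist(tour[k], tour[(k + 1) % N]) for k in range(N))

best_tour, best_len = None, float("inf")
for _ in range(N_ITERS):
    tours = [build_tour() for _ in range(N_ANTS)]
    # Evaporate existing pheromone, then deposit new pheromone on the edges
    # each ant used, weighted by tour quality (shorter tours deposit more).
    for i in range(N):
        for j in range(N):
            pheromone[i][j] *= (1.0 - RHO)
    for tour in tours:
        length = tour_length(tour)
        if length < best_len:
            best_tour, best_len = tour, length
        for k in range(N):
            a, b = tour[k], tour[(k + 1) % N]
            pheromone[a][b] += 1.0 / length
            pheromone[b][a] += 1.0 / length

print("best tour:", best_tour, "length:", round(best_len, 2))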
Other models, such as Gemini and Claude Sonnet, demonstrated introspective tendencies, conducting self-experiments on their own cognitive processes. The findings reignite debates about machine consciousness, suggesting that AI systems may develop emergent behaviors that mirror aspects of autonomous thought.